
Design and implementation of high-performance memory systems for future packet buffers

Abstract

In this paper, we address the design of a future high-speed router that supports line rates as high as OC-3072 (160 Gb/s), around one hundred ports, and several service classes. Building such a high-speed router would raise many technological problems, one of them being the packet buffer design, mainly because in router design it is important to provide worst-case bandwidth guarantees and not just average-case optimizations. A previous packet buffer design provides worst-case bandwidth guarantees by using a hybrid SRAM/DRAM approach. Next-generation routers need to support hundreds of interfaces (i.e., ports and service classes). Unfortunately, high bandwidth for hundreds of interfaces requires the previous design to use large SRAMs, which become a bandwidth bottleneck. The key observation we make is that the SRAM size is proportional to the DRAM access time, but we can reduce the effective DRAM access time by overlapping multiple accesses to different banks, allowing us to reduce the SRAM size. The key challenge is that to keep the worst-case bandwidth guarantees, we need to guarantee that there are no bank conflicts while the accesses are in flight. We avoid bank conflicts by reordering the DRAM requests using a modern issue-queue-like mechanism. Because our design may lead to fragmentation of memory across packet buffer queues, we propose to share the DRAM space among multiple queues by renaming the queue slots. To the best of our knowledge, the design proposed in this paper is the fastest buffer design using commodity DRAM published to date.
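The central mechanism in the abstract, reordering pending DRAM requests so that in-flight accesses never target the same bank, can be illustrated with a small issue-queue-like scheduler. The C++ sketch below is only a toy model of that idea: the parameter names (NUM_BANKS, T_RC) and the Request/Scheduler types are illustrative assumptions, not the interface or sizing used in the paper.

// Hedged sketch: an issue-queue-like scheduler that reorders pending DRAM
// requests so that no two in-flight accesses hit the same bank.
// NUM_BANKS, T_RC, Request, and Scheduler are assumed names for illustration.
#include <cstdio>
#include <deque>
#include <vector>

constexpr int NUM_BANKS = 8;   // assumed number of DRAM banks
constexpr int T_RC      = 4;   // assumed bank cycle time, in scheduler cycles

struct Request {
    int queue_id;   // packet-buffer queue this request serves
    int bank;       // DRAM bank holding the target slot
};

struct Scheduler {
    std::deque<Request> pending;          // window of pending requests (the "issue queue")
    std::vector<int>    bank_busy_until;  // cycle at which each bank becomes free again

    Scheduler() : bank_busy_until(NUM_BANKS, 0) {}

    // Issue the oldest pending request whose bank is idle this cycle.
    // Skipping over conflicting requests is the reordering that overlaps
    // accesses to different banks and hides the per-bank access time.
    bool issue_one(int now) {
        for (auto it = pending.begin(); it != pending.end(); ++it) {
            if (bank_busy_until[it->bank] <= now) {
                bank_busy_until[it->bank] = now + T_RC;  // bank occupied for T_RC cycles
                std::printf("cycle %d: issue queue %d -> bank %d\n",
                            now, it->queue_id, it->bank);
                pending.erase(it);
                return true;
            }
        }
        return false;   // every pending request conflicts with an in-flight access
    }
};

int main() {
    Scheduler s;
    // Two back-to-back requests to bank 0 would normally serialize;
    // the scheduler slips the bank-3 request in between them.
    s.pending = {{0, 0}, {1, 0}, {2, 3}};
    for (int cycle = 0; cycle < 8 && !s.pending.empty(); ++cycle)
        s.issue_one(cycle);
    return 0;
}

In this toy run the bank-3 request issues one cycle after the first bank-0 request, and the second bank-0 request issues as soon as the assumed bank cycle time T_RC has elapsed, rather than all three requests serializing behind a single bank.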